3 · Minimal Driver Contract
See MCS Driver Contract for the detailed pseudocode.
The core MCSDriver interface is minimal:
- meta: Driver metadata (ID, version, protocol, transport, etc.)
- get_function_description(model_name?): Returns an LLM-readable function spec.
- get_driver_system_message(model_name?): Returns the full system prompt.
- process_llm_response(llm_response): Parses and executes calls, returns the result.
If no call is detected, process_llm_response() returns the response unchanged so the output can be chained.
The exact signatures are up to each SDK, but the semantics must match; a sketch follows.
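For orientation, here is a minimal sketch of the contract in Python. The method names mirror the spec; the type hints and the shape of meta are assumptions, since each SDK chooses its own signatures.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional


class MCSDriver(ABC):
    """Sketch of the minimal driver contract (types are illustrative)."""

    # Driver metadata: ID, version, protocol, transport, etc.
    # (represented here as a plain dict; the real shape is SDK-specific)
    meta: dict

    @abstractmethod
    def get_function_description(self, model_name: Optional[str] = None) -> str:
        """Return an LLM-readable function spec (section 3.1)."""

    @abstractmethod
    def get_driver_system_message(self, model_name: Optional[str] = None) -> str:
        """Return a complete system prompt (section 3.2)."""

    @abstractmethod
    def process_llm_response(self, llm_response: str) -> Any:
        """Parse the LLM output and execute a detected call; if no call
        is detected, return the response unchanged for chaining (3.3)."""
```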
3.1 get_function_description(model_name?)
Returns a static artifact that describes the available functions in an LLM-readable format. The approach follows a standard-first principle: if an established specification format exists, it should be used.
Suitable standard formats include:
- OpenAPI (JSON/YAML) – for RESTful APIs
- JSON Schema – for structured input/output validation, CLI tools, or message formats
- GraphQL SDL – for GraphQL-based APIs
- WSDL – for SOAP and legacy enterprise services
- gRPC / Protocol Buffers (proto) – for high-performance binary APIs
- OpenRPC – for JSON-RPC APIs
- EDIFACT/X12 schemas – for EDI-based B2B interfaces
If no standard is available, a custom function description must be written.
Drivers may also implement dynamic descriptions that tailor the spec to the LLM's capabilities. For example, instead of exposing a raw OpenAPI schema, the driver may generate a simplified, LLM-friendly representation that retains full fidelity but improves comprehension. A sketch of this pattern follows.
The important point is that the driver can accept a standard spec; how that spec is treated internally is up to the driver.
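To make this concrete, here is a hedged sketch of a driver that accepts a standard OpenAPI spec and, on request, condenses it into an LLM-friendly listing. The one-line condensed format and the use of model_name as the trigger are illustrative choices, not part of the contract.

```python
import json


class OpenAPIDriver:
    """Illustrative REST driver; other contract methods are omitted."""

    def __init__(self, openapi_spec: dict):
        # The standard spec is accepted as-is; how it is treated
        # internally is up to the driver.
        self.spec = openapi_spec

    def get_function_description(self, model_name=None) -> str:
        if model_name is None:
            # Default: expose the raw standard spec.
            return json.dumps(self.spec)
        # Dynamic variant: condense each operation into one line while
        # keeping paths, methods, and parameter names intact.
        lines = []
        for path, methods in self.spec.get("paths", {}).items():
            for method, op in methods.items():
                params = ", ".join(p["name"] for p in op.get("parameters", []))
                summary = op.get("summary", "")
                lines.append(f"{method.upper()} {path}({params}): {summary}")
        return "\n".join(lines)
```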
3.2 get_driver_system_message(model_name?)
Returns a complete system message containing or referencing the function description, crafted specifically for an LLM family.
This message guides the model to make valid and parseable tool calls. While the default behavior may simply inline get_function_description(), advanced drivers can define custom prompts tailored to different LLMs (e.g. OpenAI, Claude, Mistral); a sketch follows the list below. Such prompts may include:
- Format hints
- JSON schema constraints
- Few-shot examples
- Token budget control
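Continuing the illustrative driver above, here is a sketch of a default prompt that inlines the function description, plus one family-specific variation. The JSON call format and the few-shot trigger are assumptions made for the sake of the example.

```python
class OpenAPIDriver:
    # ... __init__ and get_function_description as sketched above ...

    def get_driver_system_message(self, model_name=None) -> str:
        spec = self.get_function_description(model_name)
        # Default behavior: inline the function description with a
        # strict format hint so calls stay parseable.
        prompt = (
            "You may call the functions listed below. To call one, reply "
            'with a single JSON object: {"function": "<name>", '
            '"arguments": {...}}. Otherwise reply in plain text.\n\n'
            f"Available functions:\n{spec}"
        )
        # Hypothetical per-family tailoring: add a few-shot example for
        # model families assumed to benefit from one.
        if model_name and model_name.startswith("mistral"):
            prompt += (
                "\n\nExample:\n"
                "User: What is the weather in Paris?\n"
                'Assistant: {"function": "get_weather", '
                '"arguments": {"city": "Paris"}}'
            )
        return prompt
```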
3.3 process_llm_response(llm_response)
Consumes the output message generated by the LLM and executes the described operation if a call is detected. The method should:
- Validate and parse the input (typically JSON or structured text)
- If a call is detected: Map the request to a bridge-compatible operation (e.g. HTTP call, MQTT message)
- Return the raw result without postprocessing (the orchestrator or client will handle formatting or retries)
This separation ensures that drivers focus on interfacing with the external system, while clients and orchestrators remain agnostic of the driver's internal logic and implementation.
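Finally, a sketch of the response handler for the same illustrative driver: parse, detect, map to an HTTP call, and return the raw result. The single-JSON-object call format and the GET-with-query-parameters mapping are assumptions carried over from the sketches above.

```python
import json
import urllib.parse
import urllib.request


class OpenAPIDriver:
    # ... methods from the sketches above ...
    base_url = "https://api.example.com"  # hypothetical bridge endpoint

    def process_llm_response(self, llm_response: str):
        # Validate and parse: a call is expected as one JSON object.
        try:
            call = json.loads(llm_response)
        except json.JSONDecodeError:
            return llm_response  # no call detected: pass through unchanged
        if not isinstance(call, dict) or "function" not in call:
            return llm_response  # structured text, but not a call

        # Map the request to a bridge-compatible operation; here a
        # plain HTTP GET with the arguments as query parameters.
        query = urllib.parse.urlencode(call.get("arguments", {}))
        url = f"{self.base_url}/{call['function']}?{query}"
        with urllib.request.urlopen(url) as resp:
            # Return the raw result; formatting and retries are the
            # orchestrator's or client's concern.
            return resp.read().decode("utf-8")
```

Wired together, an orchestrator would send get_driver_system_message() as the system prompt, hand each model reply to process_llm_response(), and either feed the raw result back for another turn or surface the passed-through text to the user.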